

Metric Transforms and Low Rank Representations of Kernels for Fast Attention

Neural Information Processing Systems

We introduce a new linear-algebraic tool based on group representation theory and use it to address three key problems in machine learning.

1. Past researchers have proposed fast attention algorithms for LLMs by approximating or replacing softmax attention with other functions, such as low-degree polynomials. The key property of these functions is that, when applied entry-wise to the matrix QK^⊤, the result is a low-rank matrix when Q and K are n × d matrices and n ≫ d. This suggests a natural question: what are all functions f with this property? If other such f exist and are quickly computable, they can be used in place of softmax for fast subquadratic attention algorithms.
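The low-rank property can be checked numerically. The sketch below (an illustration, not code from the paper) applies the entry-wise square f(x) = x² to QK^⊤: since (q·k)² expands into a sum of at most d(d+1)/2 products of monomial features, the resulting n × n matrix has rank far below n when n ≫ d.

```python
import numpy as np

# Illustrative sketch: entry-wise low-degree polynomials of Q K^T
# produce low-rank matrices when n >> d.
rng = np.random.default_rng(0)
n, d = 200, 4
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))

M = Q @ K.T   # rank at most d = 4
A = M ** 2    # entry-wise square: rank at most d*(d+1)/2 = 10

print(np.linalg.matrix_rank(M))  # at most 4
print(np.linalg.matrix_rank(A))  # at most 10, far below n = 200
```

By contrast, applying softmax-style exponentiation entry-wise generically yields a full-rank matrix, which is why exact softmax attention resists this kind of subquadratic shortcut.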